# Ultra-Long Context Understanding
## Llama 4 Maverick 17B 128E Instruct

Publisher: RedHatAI · License: Other · Tags: Multimodal Fusion, Transformers, Supports Multiple Languages

Llama 4 Maverick is a 17-billion-parameter multimodal Mixture-of-Experts (MoE) model from Meta. It supports 12 languages and image understanding, and is suitable for both commercial and research applications.
## Llama 4 Scout 17B 16E Instruct Bnb 8bit

Publisher: bnb-community · License: Other · Tags: Multimodal Fusion, Transformers, Supports Multiple Languages

The Llama 4 series comprises multimodal AI models from Meta that support text-and-image interaction, use a Mixture-of-Experts (MoE) architecture, and deliver leading performance in text and image comprehension.
## Llama 4 Scout 17B 16E Unsloth

Publisher: unsloth · License: Other · Tags: Text-to-Image, Transformers, Supports Multiple Languages

Llama 4 Scout is a 17-billion-parameter multimodal AI model from Meta, built on a Mixture-of-Experts architecture with support for 12 languages and image understanding.
## Meta Llama Llama 4 Maverick 17B 128E Instruct

Publisher: Undi95 · License: Other · Tags: Multimodal Fusion, Transformers, Supports Multiple Languages

Llama 4 Maverick is a multimodal AI model released by Meta that supports text and image understanding. It uses a Mixture-of-Experts (MoE) architecture and excels at multilingual text and code generation.
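Every listing above names a Mixture-of-Experts (MoE) architecture. As a rough illustration of the idea only (not Meta's actual implementation), the sketch below shows top-k expert routing in NumPy; all function names, dimensions, and weights are hypothetical:

```python
import numpy as np

def moe_layer(x, gate_w, expert_ws, top_k=2):
    """Toy Mixture-of-Experts layer with top-k routing (illustrative only).

    x:         (tokens, d) input activations
    gate_w:    (d, n_experts) router weights
    expert_ws: list of (d, d) per-expert weight matrices
    """
    logits = x @ gate_w                           # router scores: (tokens, n_experts)
    top = np.argsort(logits, axis=1)[:, -top_k:]  # indices of top-k experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        w = np.exp(sel - sel.max())               # softmax over the selected experts
        w /= w.sum()
        for weight, e in zip(w, top[t]):          # weighted sum of expert outputs
            out[t] += weight * (x[t] @ expert_ws[e])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 3
x = rng.normal(size=(tokens, d))
y = moe_layer(x, rng.normal(size=(d, n_experts)),
              [rng.normal(size=(d, d)) for _ in range(n_experts)])
print(y.shape)  # (3, 8)
```

The point of this design, in models like the ones listed here, is that each token activates only a few experts, so the active parameter count stays far below the total parameter count.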